Bibliography

[83] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners. arXiv preprint arXiv:2111.06377, 2021.

[84] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.

[85] Koen Helwegen, James Widdicombe, Lukas Geiger, Zechun Liu, Kwang-Ting Cheng, and Roeland Nusselder. Latent weights do not exist: Rethinking binarized neural network optimization. Advances in Neural Information Processing Systems, 32, 2019.

[86] Pedro Hermosilla, Tobias Ritschel, and Timo Ropinski. Total denoising: Unsupervised learning of 3D point cloud cleaning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 52–60, 2019.

[87] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.

[88] Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Dynabert: Dynamic bert with adaptive width and depth. Advances in Neural Information Processing Systems, 33:9782–9793, 2020.

[89] Lu Hou, Zhiqi Huang, Lifeng Shang, Xin Jiang, Xiao Chen, and Qun Liu. Dynabert: Dynamic bert with adaptive width and depth. In Advances in Neural Information Processing Systems, 2020.

[90] Andrew G Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, and Hartwig Adam. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861, 2017.

[91] Qinghao Hu, Peisong Wang, and Jian Cheng. From hashing to cnns: Training binary weight networks via hashing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 32, 2018.

[92] Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4700–4708, 2017.

[93] Kun Huang, Bingbing Ni, and Xiaokang Yang. Efficient quantization for neural networks with binary weights and low bitwidth activations. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 3854–3861, 2019.

[94] Lianghua Huang, Xin Zhao, and Kaiqi Huang. Got-10k: A large high-diversity benchmark for generic object tracking in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, 43(5):1562–1577, 2019.

[95] Yan Huang, Jingsong Xu, Qiang Wu, Zhedong Zheng, Zhaoxiang Zhang, and Jian Zhang. Multi-pseudo regularized label for generated data in person re-identification. IEEE Transactions on Image Processing, 28(3):1391–1403, 2018.

[96] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V Le, Yonghui Wu, et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. Advances in Neural Information Processing Systems, 32, 2019.